
Reflecting On "Artificial General Intelligence" And AI Sentience

#artificialintelligence

Intelligence comes in many forms. Octopuses, for example, are highly intelligent yet completely unlike humans. In case you haven't noticed, artificial intelligence systems have been behaving in increasingly astonishing ways lately. OpenAI's new model DALL-E 2, for instance, can produce captivating original images based on simple text prompts. Models like DALL-E are making it harder to dismiss the notion that AI is capable of creativity. Consider, for instance, DALL-E's imaginative rendition of "a hip-hop cow in a denim jacket recording a hit single in the studio."


Stanford AI experts call BS on claims that Google's LaMDA is sentient

#artificialintelligence

Two Stanford heavyweights have weighed in on the fiery AI sentience debate -- and the duo is firmly in the "BS" corner. The wrangle recently came to a head over arguments about Google's LaMDA system. Developer Blake Lemoine sparked the controversy. Lemoine, who worked for Google's Responsible AI team, had been testing whether the large language model (LLM) used harmful speech. The 41-year-old told The Washington Post that his conversations with the AI convinced him that it had a sentient mind.


AI Created?

#artificialintelligence

A strange news story appeared a while ago about a computer engineer who had just been sacked by Google. My first reaction was anger. Google has dismissed other people unfairly before, such as James Damore. However, it was not long before I learned that this was a totally different situation. In this case the engineer, Blake Lemoine, was not fired for saying something politically incorrect; he had broken the company's confidentiality rules.


Irony machine: why are AI researchers teaching computers to recognise irony?

#artificialintelligence

What was your first reaction when you heard about Blake Lemoine, the Google engineer who announced last month that the AI program he was working on had developed consciousness? If, like me, you're instinctively suspicious, it might have been something like: Is this guy serious? Does he honestly believe what he is saying? Or is this an elaborate hoax? Put the answers to those questions to one side.



A Google engineer believed he found an AI bot that was sentient. It cost him his job.

#artificialintelligence

Blake Lemoine, an engineer who claimed an AI bot was sentient, was fired from Google. "We wish Blake well," a spokesperson for Google told the Washington Post. Experts told Insider it is very unlikely the chatbot is sentient. The engineer who claimed a chatbot gained sentience was fired from Google on Friday, both he and the tech giant confirmed. Blake Lemoine sparked controversy after publishing a paper about his conversations with the Google artificial intelligence chatbot LaMDA, which led him to believe the bot had a mind of its own.


Google fires engineer who claimed its AI chatbot LaMDA is sentient

#artificialintelligence

Google has fired a software engineer who claimed that the search giant's artificial intelligence (AI) system, LaMDA (short for "Language Model for Dialogue Applications"), had become sentient and begun reasoning like a human. Blake Lemoine was put on administrative leave last month. "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence," Lemoine wrote in an email to colleagues at the company.


Blake Lemoine: Google fires engineer who said AI tech has feelings

BBC News

In its statement, Google said it takes the responsible development of AI "very seriously" and published a report detailing this. It added that any employee concerns about the company's technology are reviewed "extensively", and that LaMDA has been through 11 reviews.


The World May Have Its First Sentient AI

#artificialintelligence

The world may now have its first sentient AI chatbot, LaMDA (short for Language Model for Dialogue Applications). After listening to an interesting discussion on YouTube between Blake Lemoine and Dr James Cooke, I feel Blake makes a compelling case. Jump to 19:22 in the video to hear how Blake came to believe that LaMDA may be sentient. Blake Lemoine is an AI researcher who works for Google's Responsible AI organization. His opinion about LaMDA is controversial within the AI community.


Stop debating whether AI is 'sentient' -- the question is if we can trust it

#artificialintelligence

The past month has seen a frenzy of articles, interviews, and other types of media coverage about Blake Lemoine, a Google engineer who told The Washington Post that LaMDA, a large language model created for conversations with users, is "sentient." After reading a dozen different takes on the topic, I have to say that the media has become (a bit) disillusioned with the hype surrounding current AI technology. A lot of the articles discussed why deep neural networks are not "sentient" or "conscious." This is an improvement in comparison to a few years ago, when news outlets were creating sensational stories about AI systems inventing their own language, taking over every job, and accelerating toward artificial general intelligence. But the fact that we're discussing sentience and consciousness again underlines an important point: our AI systems, namely large language models, are becoming increasingly convincing while still suffering from fundamental flaws that scientists have pointed out on different occasions.